Review length


How to Train Your Advisor: Steering Black-Box LLMs with Advisor Models

Asawa, Parth, Zhu, Alan, Zaharia, Matei, Dimakis, Alexandros G., Gonzalez, Joseph E.

arXiv.org Artificial Intelligence

Foundation models are increasingly deployed as black-box services, where model weights cannot be modified and customization is limited to prompting. While static prompt optimization has shown promise, it produces a single fixed prompt that fails to adapt to different inputs, users, or environments. We introduce Advisor Models, lightweight parametric policies trained with reinforcement learning to reactively issue natural language steering instructions in-context to black-box models. The advisor is a second small model that sits between the input and the model, shaping behavior on a per-instance basis using reward signals from the environment. Across multiple domains involving reasoning and personalization, we show that Advisor Models outperform static prompt optimizers, discovering environment dynamics and improving downstream task performance. We also demonstrate the generalizability of advisors by transferring them across black-box models, as well as the framework's ability to achieve specialization while retaining robustness to out-of-distribution inputs. Viewed more broadly, Advisor Models provide a learnable interface to black-box systems where the advisor acts as a parametric, environment-specific memory. We argue that dynamic optimization of black-box models via Advisor Models is a promising direction for enabling personalization and environment-adaptable AI with frontier-level capabilities.
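The advisor-in-the-loop pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `small_advisor` and `black_box_model` are stand-ins for a small trainable policy and a frozen frontier-model API, and the rule-based advice here is a placeholder for a policy learned with reinforcement learning.

```python
# Hypothetical sketch of the Advisor Models pattern: a small model emits a
# per-instance steering instruction that is injected in-context into a
# black-box model's prompt. All names here are illustrative stand-ins.

def small_advisor(task_input: str) -> str:
    """Tiny trainable policy: emits a natural-language steering instruction.
    A real advisor would be a small LM updated with RL from task reward."""
    if "solve" in task_input.lower() or "step" in task_input.lower():
        return "Reason step by step and double-check arithmetic."
    return "Answer concisely in the user's preferred style."

def black_box_model(prompt: str) -> str:
    """Frozen model reachable only through prompting (stubbed here)."""
    return f"[model response to: {prompt[:40]}...]"

def advised_call(task_input: str) -> str:
    advice = small_advisor(task_input)           # per-instance steering
    prompt = f"{advice}\n\nTask: {task_input}"   # injected in-context
    return black_box_model(prompt)

print(advised_call("Solve 12 * 17 step by step."))
```

Only the advisor's parameters are updated during training; the black-box model is never modified, which is what makes the approach applicable to closed API-served models.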


Can LLM feedback enhance review quality? A randomized study of 20K reviews at ICLR 2025

Thakkar, Nitya, Yuksekgonul, Mert, Silberg, Jake, Garg, Animesh, Peng, Nanyun, Sha, Fei, Yu, Rose, Vondrick, Carl, Zou, James

arXiv.org Artificial Intelligence

Peer review at AI conferences is stressed by rapidly rising submission volumes, leading to deteriorating review quality and increased author dissatisfaction. To address these issues, we developed Review Feedback Agent, a system leveraging multiple large language models (LLMs) to improve review clarity and actionability by providing automated feedback on vague comments, content misunderstandings, and unprofessional remarks to reviewers. Implemented at ICLR 2025 as a large randomized controlled study, our system provided optional feedback to more than 20,000 randomly selected reviews. To ensure high-quality feedback for reviewers at this scale, we also developed a suite of automated reliability tests powered by LLMs that acted as guardrails to ensure feedback quality, with feedback only being sent to reviewers if it passed all the tests. The results show that 27% of reviewers who received feedback updated their reviews, and over 12,000 feedback suggestions from the agent were incorporated by those reviewers. This suggests that many reviewers found the AI-generated feedback sufficiently helpful to merit updating their reviews. Incorporating AI feedback led to significantly longer reviews (an average increase of 80 words among those who updated after receiving feedback) and more informative reviews, as evaluated by blinded researchers. Moreover, reviewers who were selected to receive AI feedback were also more engaged during paper rebuttals, as seen in longer author-reviewer discussions. This work demonstrates that carefully designed LLM-generated review feedback can enhance peer review quality by making reviews more specific and actionable while increasing engagement between reviewers and authors. The Review Feedback Agent is publicly available at https://github.com/zou-group/review_feedback_agent.
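The guardrail pattern the abstract describes, where feedback reaches a reviewer only if it passes every reliability test, can be sketched as follows. This is a minimal illustration under assumed checks: the function names and the two toy tests are invented for the example and are not the paper's actual reliability suite (which is itself LLM-powered).

```python
import re

# Hypothetical sketch of the send-only-if-all-tests-pass guardrail pattern.
# The two checks below are toy stand-ins for the paper's LLM-based tests.

def no_hallucinated_quotes(feedback: str, review: str) -> bool:
    """Any text the feedback quotes must actually appear in the review."""
    return all(q in review for q in re.findall(r'"([^"]+)"', feedback))

def is_professional(feedback: str, review: str) -> bool:
    """Reject feedback containing obviously unprofessional language."""
    banned = {"lazy", "incompetent", "nonsense"}
    return not any(word in feedback.lower() for word in banned)

RELIABILITY_TESTS = [no_hallucinated_quotes, is_professional]

def send_if_reliable(feedback: str, review: str) -> bool:
    """Send feedback to the reviewer only if every guardrail test passes."""
    return all(test(feedback, review) for test in RELIABILITY_TESTS)
```

The key design choice mirrored here is conservatism: a single failed test suppresses the feedback entirely rather than sending a partially vetted message.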


What factors influence the popularity of user-generated text in the creative domain? A case study of book reviews

Sazzed, Salim

arXiv.org Artificial Intelligence

This study investigates a range of psychological, lexical, semantic, and readability features of book reviews to elucidate the factors underlying their perceived popularity. To this end, we conduct statistical analyses of various features, including the types and frequency of opinion and emotion-conveying terms, connectives, character mentions, word uniqueness, commonness, and sentence structure, among others. Additionally, we utilize two readability tests to explore whether reading ease is positively associated with review popularity. Finally, we employ traditional machine learning classifiers and transformer-based fine-tuned language models with n-gram features to automatically determine review popularity. Our findings indicate that, with the exception of a few features (e.g., review length, emotions, and word uniqueness), most attributes do not exhibit significant differences between popular and non-popular review groups. Furthermore, the poor performance of machine learning classifiers using the word n-gram feature highlights the challenges associated with determining popularity in creative domains. Overall, our study provides insights into the factors underlying review popularity and highlights the need for further research in this area, particularly in the creative realm.
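One of the readability tests commonly used in such analyses is the Flesch Reading Ease score, which can be computed from sentence, word, and syllable counts. The sketch below is a self-contained approximation, not the study's code; the syllable counter is a deliberately crude vowel-group heuristic, so scores will differ slightly from dictionary-based tools.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text.
    Formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

Testing whether such a score differs between popular and non-popular review groups is then a standard two-sample statistical comparison.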


Review Helpfulness Assessment based on Convolutional Neural Network

Qu, Xianshan, Li, Xiaopeng, Rose, John R.

arXiv.org Artificial Intelligence

In this paper we describe the implementation of a convolutional neural network (CNN) used to assess online review helpfulness. To our knowledge, this is the first use of this architecture to address this problem. We explore two factors that impact CNN performance: different word embedding initializations and different input review lengths. We also propose an approach to combining rating star information with review text to further improve prediction accuracy. We demonstrate that this can improve the overall accuracy by 2%. Finally, we evaluate the method on a benchmark dataset and show an improvement in accuracy over published results for traditional methods: 2.5% for a model trained using only review text and 4.24% for a model trained on a combination of rating star information and review text.
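The core idea of combining text features with the rating star can be sketched with a toy text-CNN forward pass. This is an illustrative NumPy sketch under assumed shapes (20 tokens, 8-dim embeddings, 4 filters of width 3), not the paper's architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 20-token review as 8-dim embeddings, plus one rating star.
embeddings = rng.normal(size=(20, 8))     # (sequence_len, embed_dim)
rating_star = np.array([4.0])

# 1D convolution over the token axis with 4 filters of width 3, followed by
# max-over-time pooling -- the standard text-CNN recipe for a fixed-size vector.
filters = rng.normal(size=(4, 3, 8))      # (n_filters, width, embed_dim)
conv = np.array([
    [(embeddings[i:i + 3] * f).sum() for i in range(20 - 3 + 1)]
    for f in filters
])                                        # (n_filters, 18) feature maps
text_features = conv.max(axis=1)          # max pooling -> shape (4,)

# The abstract's key idea: concatenate rating information with the pooled
# text features before the final classification layer.
combined = np.concatenate([text_features, rating_star])
print(combined.shape)  # (5,)
```

A final dense layer over `combined` would then predict helpfulness; the concatenation is what lets the classifier condition jointly on the review text and its star rating.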


A Beginner's Guide on Sentiment Analysis with RNN – Towards Data Science

@machinelearnbot

In order to feed this data into our RNN, all input documents must have the same length. We will limit the maximum review length to max_words by truncating longer reviews and padding shorter reviews with a null value (0). We can accomplish this using the pad_sequences() function in Keras. For now, set max_words to 500. We start building our model architecture in the code cell below.
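The truncate-and-pad step described above can be made concrete. Since the excerpt names Keras's `pad_sequences()`, the sketch below reimplements its default behavior (`padding='pre'`, `truncating='pre'`, pad value 0) in pure Python so it runs without Keras; in a real pipeline you would call `keras.preprocessing.sequence.pad_sequences(reviews, maxlen=max_words)` instead.

```python
# Pure-Python equivalent of Keras pad_sequences with its defaults:
# truncate long sequences from the front and left-pad short ones with 0.

max_words = 500

def pad_equivalent(sequences, maxlen, value=0):
    padded = []
    for seq in sequences:
        seq = list(seq)[-maxlen:]  # truncating='pre': keep the last maxlen tokens
        padded.append([value] * (maxlen - len(seq)) + seq)  # padding='pre'
    return padded

# One short review (3 token ids) and one long review (600 token ids).
reviews = [[5, 25, 100], list(range(600))]
X = pad_equivalent(reviews, maxlen=max_words)
print(len(X[0]), len(X[1]))  # 500 500
```

After this step every review is exactly `max_words` ids long, so the batch can be fed to the RNN's embedding layer as a rectangular array.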